16 research outputs found

    Splittings, robustness, and structure of complete sets


    Discovering Motifs in Real-World Social Networks

    We built a framework for analyzing the content of large social networks, based on the approximate counting technique developed by Gonen and Shavitt. Our toolbox was applied to data from a large forum, boards.ie, the most prominent community website in Ireland. For the purpose of this experiment, we were granted access to 10 years of forum data. This is the first time the approximate counting technique has been tested on real-world social network data.
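The abstract does not spell out the Gonen-Shavitt algorithm, but the general idea behind approximate motif counting can be illustrated with the simplest case, triangles. The sketch below is a generic wedge-sampling estimator, not the authors' framework: it picks random wedges (paths u-v-w) and checks how often the closing edge exists.

```python
import random

def estimate_triangles(adj, samples=2000, seed=0):
    """Estimate the number of triangles by wedge sampling: pick a random
    wedge (a path u-v-w through center v) and test whether edge u-w closes
    it. Each triangle contains exactly 3 wedges, hence the division by 3."""
    rng = random.Random(seed)
    nodes = list(adj)
    # number of wedges centered at v is deg(v) choose 2
    wedge_counts = [len(adj[v]) * (len(adj[v]) - 1) // 2 for v in nodes]
    total_wedges = sum(wedge_counts)
    if total_wedges == 0:
        return 0.0
    closed = 0
    for _ in range(samples):
        v = rng.choices(nodes, weights=wedge_counts)[0]
        u, w = rng.sample(sorted(adj[v]), 2)
        if w in adj[u]:
            closed += 1
    return closed / samples * total_wedges / 3

# the complete graph K5 contains exactly 10 triangles
k5 = {i: {j for j in range(5) if j != i} for i in range(5)}
print(estimate_triangles(k5))
```

The appeal on forum-sized data is that the sample count, not the motif count, drives the running time; counting larger motifs exactly quickly becomes infeasible.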

    Using autoreducibility to separate complexity classes


    Separating complexity classes using autoreducibility


    Approximation algorithms and hardness of approximation for knapsack problems

    We show various hardness-of-approximation results for knapsack and related problems; in particular, we show that unless the Exponential Time Hypothesis is false, subset-sum cannot be approximated any better than by an FPTAS. We also give a simple new algorithm for approximating knapsack and subset-sum that can be adapted to run in small space or in small parallel time. Finally, we prove that knapsack cannot be solved in Mulmuley's parallel PRAM model, even when the input is restricted to small bit-length.
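For context on what an FPTAS for subset-sum looks like (this is the classic list-trimming scheme, not necessarily the paper's new algorithm): maintain the list of achievable sums, but after each item drop any sum within a (1 + delta) factor of the previously kept one, which bounds the list length polynomially in 1/eps.

```python
def approx_subset_sum(items, target, eps):
    """FPTAS for subset-sum via list trimming: returns an achievable sum
    <= target that is at least (1 - eps) times the optimum."""
    sums = [0]
    delta = eps / (2 * len(items))
    for x in items:
        merged = sorted(set(sums + [s + x for s in sums if s + x <= target]))
        # trim: keep a value only if it exceeds the last kept value
        # by more than a (1 + delta) factor
        trimmed, last = [], -1.0
        for s in merged:
            if s > last * (1 + delta):
                trimmed.append(s)
                last = s
        sums = trimmed
    return max(sums)

print(approx_subset_sum([104, 102, 201, 101], target=308, eps=0.2))
```

The hardness result quoted above says this eps-dependence is essentially the end of the story under ETH: no approximation scheme can do asymptotically better than an FPTAS for subset-sum.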

    Sparse Selfreducible Sets and Nonuniform Lower Bounds

    It is well known that the class of sets computable by polynomial-size circuits is equal to the class of sets that are polynomial-time reducible to a sparse set. It is widely believed, but so far unproven, that there are sets in (Formula presented.), or even in (Formula presented.), that are not computable by polynomial-size circuits and hence are not reducible to a sparse set. In this paper we study this question in a more restricted setting: what is the computational complexity of sparse sets that are selfreducible? It follows from earlier work of Lozano and Torán (Mathematical Systems Theory, 1991) that (Formula presented.) does not have sparse selfreducible hard sets. We define a natural variant of selfreduction, tree-selfreducibility, and show that (Formula presented.) does not have sparse tree-selfreducible hard sets. We also construct an oracle relative to which all of (Formula presented.) is reducible to a sparse tree-selfreducible set. These lower bounds are corollaries of more general results about the computational complexity of sparse selfreducible sets, and can be interpreted as super-polynomial circuit lower bounds for (Formula presented.).
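The notion of selfreducibility at the heart of this paper can be illustrated with its textbook instance, the search-to-decision self-reduction of SAT: membership of a formula is decided by recursing on the two one-variable restrictions. A minimal sketch, with a brute-force stand-in for the decision oracle (the point is only the query pattern, not the oracle's implementation):

```python
from itertools import product

def sat_oracle(clauses, n, fixed):
    """Stand-in decision oracle: is the CNF (clauses over variables 1..n,
    literals as signed ints) satisfiable given the partial assignment
    `fixed`?  Brute force here, purely for illustration."""
    free = [i for i in range(1, n + 1) if i not in fixed]
    for bits in product([False, True], repeat=len(free)):
        assign = {**fixed, **dict(zip(free, bits))}
        if all(any(assign[abs(l)] == (l > 0) for l in cl) for cl in clauses):
            return True
    return False

def find_assignment(clauses, n):
    """Self-reduction of SAT: fix the variables one at a time, asking the
    decision oracle which branch remains satisfiable."""
    fixed = {}
    if not sat_oracle(clauses, n, fixed):
        return None
    for v in range(1, n + 1):
        fixed[v] = True
        if not sat_oracle(clauses, n, fixed):
            fixed[v] = False
    return fixed
```

The paper's tree-selfreducibility restricts how such recursive queries may be structured; the definitions of the (Formula presented.) classes are in the full text.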

    Study on Relationship Between Individual Work Value and Work Performance of Civil Servants-Based on the Research in China

    Civil servants are a key force behind government development and the meeting of social demands. However, most existing research on public-sector HR management works at the macro level, on policy, education, and team composition; studies relating civil servants' internal factors to their work performance are comparatively rare, and empirical studies rarest of all. Building on a literature review, this paper proposes hypotheses about the relationship between civil servants' work values and work performance; through questionnaire design and statistical analysis, it identifies the factors and mechanisms by which individual work values influence work performance, and measures the degree to which each parameter affects performance. On the basis of this empirical study, the paper proposes practical methods and suggestions for managing the effect of civil servants' work values on individual work performance in administrative departments, in particular for HR management, screening and selection, incentives, and training and development.

    Key words: Civil servant; Work value; Work performance

    Nonapproximability of the Normalized Information Distance

    Normalized information distance (NID) uses the theoretical notion of Kolmogorov complexity, which for practical purposes is approximated by the length of the compressed version of the file involved, using a real-world compression program. This practical application is called 'normalized compression distance' and is trivially computable. It is a parameter-free similarity measure based on compression, used in pattern recognition, data mining, phylogeny, clustering, and classification. The complexity properties of its theoretical precursor, the NID, have been open. We show that the NID is neither upper semicomputable nor lower semicomputable up to any reasonable precision.
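The "trivially computable" practical variant is easy to demonstrate: with a real compressor C, the normalized compression distance is NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)). A minimal sketch using zlib as the compressor (any real-world compressor would do):

```python
import zlib

def C(data: bytes) -> int:
    """Compressed length under zlib: the practical stand-in for
    Kolmogorov complexity in the NCD."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance, the computable cousin of the NID."""
    cx, cy, cxy = C(x), C(y), C(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"the quick brown fox jumps over the lazy dog" * 20
b_ = b"the quick brown fox jumps over the lazy dog" * 20
c = bytes(range(256)) * 4
print(ncd(a, b_))  # small: identical strings compress well together
print(ncd(a, c))   # larger: unrelated strings share little structure
```

The paper's result concerns the theoretical NID itself: unlike this compressor-based proxy, the NID admits no reasonable approximation from above or below.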